Kinetic Modeling of Data Eviction in Cache
Authors
Abstract
The reuse distance (LRU stack distance) is an essential metric for performance prediction and optimization of storage and CPU caches. Over the last four decades, there have been steady improvements in the algorithmic efficiency of reuse distance measurement, and this progress has accelerated in recent years in both theory and practical implementation. In this paper, we present a kinetic model of LRU cache memory based on the average eviction time (AET) of the cached data. The AET model enables fast measurement and low-cost sampling. It can produce the miss ratio curve (MRC) in linear time with extremely low space costs. On both CPU and storage benchmarks, AET reduces time and space costs compared with prior techniques. Furthermore, AET is a composable model that can characterize shared cache behavior by modeling individual programs.
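For readers who want the gist of the construction: in the AET model, the tail distribution of reuse times, P(t) (the probability that an access's reuse time exceeds t), drives both eviction and miss prediction; roughly, the average eviction time AET(c) for a cache of size c is the point where the running sum of P(t) reaches c, and the predicted miss ratio is P(AET(c)). The sketch below shows one way such a single-pass MRC computation from a reuse-time histogram could look; the function and parameter names are mine, and the discretization and boundary handling are simplified relative to the paper.

```python
from collections import Counter

def mrc_from_reuse_times(reuse_times, cold_misses, max_cache_size):
    """Rough sketch of an AET-style miss ratio curve construction.

    reuse_times  -- reuse time of every non-cold access: the number of
                    accesses since the previous access to the same datum (>= 1)
    cold_misses  -- number of first-time (compulsory) accesses
    Returns {cache size c: predicted miss ratio}.
    """
    total = len(reuse_times) + cold_misses
    if total == 0:
        return {}
    hist = Counter(reuse_times)
    max_rt = max(hist) if hist else 0

    mrc = {}
    survivors = total      # accesses whose reuse time exceeds the current t
    integral = 0.0         # running sum of P(t); reaching c marks AET(c)
    c, t = 1, 0
    while c <= max_cache_size and t <= max_rt + 1:
        p_t = survivors / total            # P(t): Pr[reuse time > t]
        while c <= max_cache_size and integral >= c:
            mrc[c] = p_t                   # model: miss ratio(c) ~= P(AET(c))
            c += 1
        integral += p_t
        survivors -= hist.get(t + 1, 0)    # drop accesses reused exactly at t+1
        t += 1
    # For sizes whose AET exceeds the largest reuse time, only compulsory
    # misses remain under this model.
    floor = cold_misses / total
    for size in range(c, max_cache_size + 1):
        mrc[size] = floor
    return mrc
```

Because P(t) is built from a reuse-time histogram rather than from full reuse distances, the histogram itself can be gathered by sampling, which is where the model's low measurement cost comes from.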
Similar Resources
Eviction-based Cache Placement for Storage Caches
Most previous work on buffer cache management uses an access-based placement policy that places a data block into a buffer cache at the block’s access time. This paper presents an eviction-based placement policy for a storage cache that usually sits in the lower level of a multi-level buffer cache hierarchy and thereby has different access patterns from upper levels. The main idea of the evicti...
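To make the contrast concrete, here is a toy sketch (not the paper's implementation; the class and function names are mine) in which the storage-level cache is populated only by blocks the upper-level cache evicts, instead of by every block read through it:

```python
from collections import OrderedDict

class LRUCache:
    """Tiny LRU cache used by the placement sketch below."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()

    def access(self, block):
        """Return True on a hit and refresh the block's recency."""
        if block in self.blocks:
            self.blocks.move_to_end(block)
            return True
        return False

    def insert(self, block):
        """Insert a block and return the evicted block, if any."""
        self.blocks[block] = True
        self.blocks.move_to_end(block)
        if len(self.blocks) > self.capacity:
            victim, _ = self.blocks.popitem(last=False)
            return victim
        return None

def read_eviction_based(block, upper, storage):
    """Eviction-based placement: the storage cache caches what the upper
    level evicts, rather than what it reads (access-based placement)."""
    if upper.access(block):
        return "upper-level hit"
    result = "storage-cache hit" if storage.access(block) else "miss (disk read)"
    victim = upper.insert(block)   # the block enters the upper-level cache...
    if victim is not None:
        storage.insert(victim)     # ...and the storage cache absorbs the eviction
    return result
```

An access-based variant would instead call storage.insert(block) on every read that reaches the storage level; the eviction-based policy avoids duplicating recently accessed blocks across the two levels.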
A Per-File Partitioned Page Cache
In this paper we describe a new design of the operating system page cache. Page caches form an important part of the memory hierarchy and are used to access file-system data. In most operating systems, there exists a single page cache whose contents are replaced according to an LRU eviction policy. We design and implement a page cache which is partitioned by file: the per-file page cache. The per...
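A minimal sketch of the partitioning idea, assuming a naive fixed per-file page quota (the paper's actual allocation and replacement policies are not reproduced, and the names are mine):

```python
from collections import OrderedDict, defaultdict

class PerFilePageCache:
    """Toy per-file page cache: one LRU list per file instead of a single
    global LRU list, so an eviction never crosses file boundaries."""
    def __init__(self, pages_per_file):
        self.pages_per_file = pages_per_file        # naive fixed per-file quota
        self.partitions = defaultdict(OrderedDict)  # file id -> its own LRU list

    def read_page(self, file_id, page_no):
        lru = self.partitions[file_id]
        if page_no in lru:
            lru.move_to_end(page_no)                # hit: refresh recency
            return "hit"
        lru[page_no] = True                         # miss: fetch and cache the page
        if len(lru) > self.pages_per_file:
            lru.popitem(last=False)                 # evict this file's own LRU page
        return "miss"
```

The interesting design questions, how many pages each file's partition deserves and how quotas adapt over time, are exactly what a real per-file design has to answer; the fixed quota above is only for illustration.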
The CLOCK Data-Aware Eviction Approach: Towards Processing Linked Data Streams with Limited Resources
Processing streams rather than static files of Linked Data has gained increasing importance in the web of data. When processing data streams, system builders are faced with the conundrum of guaranteeing a constant maximum response time with limited resources and, possibly, no prior information on the data arrival frequency. One approach to address this issue is to delete data from a cache during...
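For context, the sketch below is the textbook CLOCK replacement algorithm the approach builds on; the data-aware admission and eviction logic the paper adds for Linked Data streams is not reproduced here, and all names are mine:

```python
class ClockCache:
    """Textbook CLOCK eviction: a circular buffer of slots with reference
    bits; the hand gives referenced entries a second chance (clearing their
    bit) and evicts the first unreferenced entry it finds."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.slots = []     # each slot is [key, reference_bit]
        self.index = {}     # key -> slot position
        self.hand = 0

    def access(self, key):
        """Return True on a hit; otherwise admit the key, evicting if full."""
        if key in self.index:
            self.slots[self.index[key]][1] = 1      # hit: set the reference bit
            return True
        if len(self.slots) < self.capacity:         # free slot still available
            self.index[key] = len(self.slots)
            self.slots.append([key, 1])
            return False
        while self.slots[self.hand][1]:             # second chance for referenced slots
            self.slots[self.hand][1] = 0
            self.hand = (self.hand + 1) % self.capacity
        victim, _ = self.slots[self.hand]           # evict and reuse this slot
        del self.index[victim]
        self.slots[self.hand] = [key, 1]
        self.index[key] = self.hand
        self.hand = (self.hand + 1) % self.capacity
        return False
```

A data-aware variant would presumably bias which entries keep their reference bit, or which are admitted at all, using knowledge of the query and the stream rather than recency alone.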
Paging for Multicore (CMP) Caches
In the last few years, multicore processors have become the dominant processor architecture. While cache eviction policies have been widely studied both in theory and practice for sequential processors, in the case in which various simultaneous processes use a shared cache, the performance of even the most common eviction policies is not yet fully understood, nor do we know if curr...
Cracking Intel Sandy Bridge's Cache Hash Function
On the Intel Sandy Bridge processor, the last level cache (LLC) is divided into cache slices, and physical addresses are distributed across the slices using a hash function. Because this hash function is undocumented, cache partitioning based on page coloring cannot be implemented. This article cracks the hash functions on two types of Intel Sandy Bridge processors by converting the problem ...
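Such slice hashes are commonly modeled as XOR (parity) reductions over subsets of the physical-address bits. The sketch below illustrates that model only; the bit masks and the example address are made-up placeholders, not the reverse-engineered Sandy Bridge function:

```python
def slice_index(phys_addr, bit_masks):
    """Model of a sliced-LLC hash: each output bit of the slice index is the
    parity (XOR) of a subset of physical-address bits selected by a mask.
    The masks passed in are hypothetical, not Intel's undocumented function."""
    index = 0
    for out_bit, mask in enumerate(bit_masks):
        parity = bin(phys_addr & mask).count("1") & 1
        index |= parity << out_bit
    return index

# Two made-up masks => four hypothetical slices.
HYPOTHETICAL_MASKS = [0x1B5F575440, 0x2EB5FAA880]
print(slice_index(0x3F8A4C000, HYPOTHETICAL_MASKS))
```

Under this model, the cracking problem amounts to inferring which address bits feed each parity from measurements of the actual hardware.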